Recently, many deep learning based beamformers have been proposed for multi-channel speech separation. Nevertheless, most of them rely on extra cues known in advance, such as speaker features, face images, or directional information. In this paper, we propose an end-to-end beamforming network for direction-guided speech separation given merely the mixture signal, named MIMO-DBnet. Specifically, we design a multi-channel input, multiple output architecture that predicts direction-of-arrival based embeddings and beamforming weights for each source. The precisely estimated directional embedding provides effective spatial discrimination guidance for the neural beamformer to offset the effect of phase wrapping, allowing more accurate reconstruction of the two sources' speech signals. Experiments show that our proposed MIMO-DBnet not only achieves consistent improvements over baseline systems, but also maintains its performance on high-frequency bands where phase wrapping occurs.
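As an illustration of the multi-channel input, multiple output design, here is a minimal PyTorch sketch of a head that emits a directional embedding and complex beamforming weights per source. The GRU encoder, layer sizes, and all names are our assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a MIMO beamforming head:
# multi-channel spectral features in, per-source DOA embeddings and
# complex beamforming weights out. All sizes are illustrative.
import torch
import torch.nn as nn

class MimoBeamformerSketch(nn.Module):
    def __init__(self, n_mics=4, n_srcs=2, feat_dim=256, emb_dim=64):
        super().__init__()
        # shared encoder over concatenated channel features (257 freq bins assumed)
        self.encoder = nn.GRU(input_size=2 * n_mics * 257, hidden_size=feat_dim,
                              batch_first=True)
        # one output branch per source (the "multiple outputs" part)
        self.doa_heads = nn.ModuleList(
            [nn.Linear(feat_dim, emb_dim) for _ in range(n_srcs)])
        # real+imag beamforming weights per mic and frequency bin
        self.bf_heads = nn.ModuleList(
            [nn.Linear(feat_dim + emb_dim, 2 * n_mics * 257) for _ in range(n_srcs)])

    def forward(self, feats):                          # feats: (B, T, 2*n_mics*257)
        h, _ = self.encoder(feats)                     # (B, T, feat_dim)
        outputs = []
        for doa_head, bf_head in zip(self.doa_heads, self.bf_heads):
            emb = doa_head(h)                          # directional embedding per frame
            w = bf_head(torch.cat([h, emb], dim=-1))   # weights conditioned on embedding
            outputs.append((emb, w))
        return outputs                                 # one (embedding, weights) pair per source
```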
Recent neural network based direction-of-arrival (DOA) estimation algorithms have performed well in scenarios with an unknown number of sound sources. These algorithms typically map the multi-channel audio input to a single output, i.e., the overall spatial pseudo-spectrum (SPS) of all sources, and are hence called MISO. However, such MISO algorithms depend heavily on an empirical threshold setting and on the angle assumption that the angles between sound sources are greater than a fixed angle. To address these limitations, we propose a novel multi-channel input, multiple output DOA network called MIMO-DoAnet. Unlike general MISO algorithms, MIMO-DoAnet predicts the SPS coding of each sound source with the help of an informative spatial covariance matrix. By doing so, the threshold-dependent task of detecting the number of sound sources becomes the easier task of detecting whether a sound source is present in each output, and the severe interaction between sound sources disappears during the inference stage. Experimental results show that MIMO-DoAnet achieves relative 18.6% and absolute 13.3%, and relative 34.4% and absolute 20.2% F1 score improvements over the MISO baseline in 3- and 4-source scenarios, respectively. The results also demonstrate that MIMO-DoAnet alleviates the threshold setting problem and effectively solves the angle assumption problem.
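The decoding side of this idea can be sketched as follows, under an assumed interface: with one spatial spectrum per output head, counting sources reduces to a per-head presence check rather than peak-picking a single summed spectrum against a tuned threshold.

```python
# Illustrative decoding for a MIMO-style DOA network (assumed interface):
# each output head emits its own 360-bin spatial spectrum, so source
# counting becomes a per-head presence check. The floor value is a
# hypothetical criterion, not the paper's.
import numpy as np

def decode_mimo_spectra(spectra, presence_floor=0.5):
    """spectra: (n_heads, 360) array, one spatial spectrum per output head."""
    doas = []
    for sp in spectra:
        # a head with no active source produces a near-flat, low spectrum
        if sp.max() >= presence_floor:
            doas.append(int(sp.argmax()))  # DOA in degrees for this head
    return doas

print(decode_mimo_spectra(np.stack([
    np.eye(360)[45],          # head 1: source at 45 degrees
    np.zeros(360),            # head 2: inactive
])))                          # -> [45]
```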
The dual-encoder structure successfully utilizes two language-specific encoders (LSEs) for code-switching speech recognition. Because the LSEs are initialized from two pre-trained language-specific models (LSMs), the dual-encoder structure can exploit abundant monolingual data and capture the attributes of each individual language. However, existing methods impose no language constraints on the LSEs and underutilize the language-specific knowledge of the LSMs. In this paper, we propose a language-specific characters aiding (LSCA) method to mitigate these problems. Specifically, during training, we introduce two language-specific losses as language constraints and generate corresponding language-specific targets for them. During decoding, we take the decoding abilities of the LSMs into account by combining the output probabilities of the two LSMs and the mixture model to obtain the final predictions. Experiments show that either the training or the decoding method of LSCA improves the model's performance. Furthermore, combining the training and decoding methods of LSCA yields up to 15.4% relative error reduction on the code-switching test set. Moreover, using our method, the system can handle code-switching speech recognition well without extra shared parameters, or even by retraining based on the two pre-trained LSMs alone.
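A hedged sketch of the decoding-side combination: the final token distribution is a weighted mix of the mixture model's and the two LSMs' output probabilities. The interpolation weights and function names are illustrative assumptions.

```python
# Sketch of combining output probabilities at decoding time (weights are
# illustrative, not tuned values from the paper).
import torch

def combine_probs(p_mix, p_lsm_a, p_lsm_b, w_mix=0.6, w_lsm=0.2):
    """Each input: (vocab,) probability tensor over a shared vocabulary."""
    p = w_mix * p_mix + w_lsm * p_lsm_a + w_lsm * p_lsm_b
    return p / p.sum()  # renormalize before the argmax / beam-search step
```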
Sound source localization aims to find the directions of arrival (DOAs) of all sound sources from observed multi-channel audio. For the practical case of an unknown number of sources, existing localization algorithms attempt to predict a likelihood-based coding (i.e., a spatial spectrum) and employ a pre-determined threshold to detect the source number and the corresponding DOA values. However, these threshold-based algorithms are not stable, as they are limited by the careful choice of the threshold. To address this problem, we propose an iterative sound source localization approach called ISSL, which iteratively extracts each source's DOA without a threshold until a termination criterion is met. Unlike threshold-based algorithms, ISSL designs an active source detector network, based on a binary classifier, that takes the residual spatial spectrum and decides whether to stop the iteration. By doing so, our ISSL can handle an arbitrary number of sources, even more than the number of sources seen during the training stage. Experimental results show that our ISSL achieves significant performance improvements in both DOA estimation and source number detection compared with existing threshold-based algorithms.
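The iterative extract-and-subtract loop can be sketched as below (an assumed interface, not the authors' code): one DOA is peeled from the residual spatial spectrum per step, and a binary active-source classifier decides when to stop.

```python
# Sketch of the iterative DOA extraction loop: the classifier replaces the
# fixed threshold. The peak-removal width and max_iters are assumptions.
import numpy as np

def issl_decode(spectrum, source_active, peak_width=10, max_iters=10):
    """spectrum: (360,) spatial spectrum; source_active: residual -> bool."""
    residual = spectrum.copy()
    doas = []
    for _ in range(max_iters):
        if not source_active(residual):     # learned stop criterion
            break
        doa = int(residual.argmax())
        doas.append(doa)
        lo, hi = doa - peak_width, doa + peak_width
        idx = np.arange(lo, hi + 1) % 360   # zero out the peak (with wrap-around)
        residual[idx] = 0.0
    return doas
```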
With the popularity of the Internet of Things (IoT), edge computing, and cloud computing, more and more stream analytics applications are being developed, including real-time trend prediction and object detection on top of IoT sensing data. One popular type of stream analytics is time series or sequence prediction and forecasting with recurrent neural network (RNN) based deep learning models. Unlike traditional analytics, which assumes data is available in advance and does not change, stream analytics deals with data that is continuously generated and whose trend/distribution may change (a.k.a. concept drift), causing prediction/forecasting accuracy to degrade over time. Another challenge is finding the best resource provisioning for stream analytics to achieve good overall latency. In this paper, we study how to best leverage edge and cloud resources to achieve better accuracy and latency for streaming analytics with an RNN model called Long Short-Term Memory (LSTM). We propose a novel edge-cloud integrated framework for hybrid stream analytics that supports low-latency inference on the edge and high-capacity training on the cloud. To achieve flexible deployment, we study different approaches for deploying our hybrid learning framework: edge-centric, cloud-centric, and edge-cloud integrated. Furthermore, our hybrid learning framework can dynamically combine inference results from an LSTM model pre-trained on historical data and another LSTM model periodically re-trained on the latest data. Using real-world and simulated streaming datasets, our experiments show that the proposed edge-cloud deployment is the best of the three deployment types in terms of latency. For accuracy, the experiments show that our dynamic learning approach performs best among all learning approaches across all three concept drift scenarios.
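The dynamic-learning loop might look like the following skeleton: low-latency inference at the edge, with cloud retraining triggered when the rolling error suggests concept drift. The drift criterion, window size, and function names are assumptions for illustration.

```python
# Illustrative skeleton, not the paper's framework: edge-side inference
# plus periodic cloud retraining when recent error indicates drift.
from collections import deque

def run_stream(model, stream, retrain_on_cloud, window=100, drift_err=0.2):
    errors = deque(maxlen=window)
    for x, y_true in stream:                 # (features, later-observed label)
        y_pred = model.predict(x)            # low-latency edge inference
        errors.append(abs(y_pred - y_true))
        # hand off to the cloud when the rolling error suggests concept drift
        if len(errors) == window and sum(errors) / window > drift_err:
            model = retrain_on_cloud(model)  # high-capacity retraining
            errors.clear()
    return model
```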
Handwritten mathematical expression recognition aims to automatically generate LaTeX sequences from given images. Currently, attention-based encoder-decoder models are widely used for this task. They typically generate target sequences in a left-to-right (L2R) manner, leaving the right-to-left (R2L) contexts unexploited. In this paper, we propose an Attention aggregation based Bi-directional Mutual learning network (ABM), which consists of one shared encoder and two parallel inverse decoders (L2R and R2L). The two decoders are enhanced via mutual distillation, which involves one-to-one knowledge transfer at each training step, making full use of the complementary information from the two inverse directions. Moreover, to handle mathematical symbols at diverse scales, an Attention Aggregation Module (AAM) is proposed to effectively integrate multi-scale coverage attentions. Notably, in the inference phase, given that the model has already learned knowledge from both directions, we use only the L2R branch for inference, keeping the original parameter size and inference speed. Extensive experiments demonstrate that our proposed approach achieves recognition accuracies of 56.85% on CROHME 2014, 52.92% on CROHME 2016, and 53.96% on CROHME 2019 without data augmentation or model ensembling, substantially outperforming state-of-the-art methods. The source code is available in the supplementary materials.
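The one-to-one mutual distillation between the L2R and R2L decoders can be sketched as a symmetric KL term between position-aligned per-step distributions; the temperature and the alignment-by-flipping (which assumes equal sequence lengths) are our simplifications, not the paper's exact loss.

```python
# Sketch of mutual distillation between two inverse decoders: each
# decoder's per-step distribution is pulled toward the (reversed,
# detached) distribution of its counterpart.
import torch
import torch.nn.functional as F

def mutual_distill_loss(logits_l2r, logits_r2l, tau=1.0):
    """logits_*: (B, T, vocab); R2L steps are flipped to align positions."""
    p_l2r = F.log_softmax(logits_l2r / tau, dim=-1)
    p_r2l = F.log_softmax(logits_r2l.flip(dims=[1]) / tau, dim=-1)
    # detach the teacher side in each direction
    kl_a = F.kl_div(p_l2r, p_r2l.exp().detach(), reduction="batchmean")
    kl_b = F.kl_div(p_r2l, p_l2r.exp().detach(), reduction="batchmean")
    return kl_a + kl_b
```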
Traffic accident prediction in driving videos aims to provide an early warning of accident occurrence and to support the decision making of safe driving systems. Previous works usually concentrate on the spatial-temporal correlation of object-level context, yet they neither fit the inherent long-tailed data distribution well nor withstand severe environmental change. In this work, we propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition, in the form of text descriptions of the visual observations and of driver attention, to facilitate model training. In particular, the text description provides dense semantic guidance for the primary context of the traffic scene, while driver attention draws the focus to the critical regions closely correlated with safe driving. CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and a driver attention guided accident prediction module. We leverage the attention mechanism in these modules to explore the core semantic cues for accident prediction. To train CAP, we extend an existing self-collected DADA-2000 dataset (with annotated driver attention for each frame) with factual text descriptions of the visual observations before the accidents. Besides, we construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames (named CAP-DATA), together with labeled fact-effect-reason-introspection descriptions and temporal accident frame labels. Extensive experiments validate the superiority of CAP over state-of-the-art approaches. The code, CAP-DATA, and all results will be released at \url{https://github.com/JWFanggit/LOTVS-CAP}.
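One plausible reading of the attentive text-to-vision shift fusion is a cross-attention step in which frame features attend over text-description embeddings; the sketch below reflects that assumption, with dimensions and wiring invented for illustration.

```python
# Hedged sketch of a text-to-vision fusion step (our reading, not CAP's
# actual module): visual tokens query text-description embeddings.
import torch
import torch.nn as nn

class TextToVisionFusionSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, vis_tokens, text_tokens):
        # vis_tokens: (B, Tv, d), text_tokens: (B, Tt, d)
        attended, _ = self.cross_attn(query=vis_tokens, key=text_tokens,
                                      value=text_tokens)
        return self.norm(vis_tokens + attended)   # residual fusion
```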
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
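One way to read the implicit alignment is that tokens from both modalities receive position encodings derived from 3D coordinates, so a plain transformer can attend across them without an explicit view transformation. The sketch below is our simplified guess, not CMT's actual module.

```python
# Sketch of 3D-coordinate-based position encoding shared across
# modalities (shapes and the MLP encoder are assumptions).
import torch
import torch.nn as nn

class PointPosEncodingSketch(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, tokens, xyz):
        # tokens: (B, N, d) image or point-cloud tokens
        # xyz:    (B, N, 3) 3D coordinates associated with each token
        return tokens + self.mlp(xyz)   # embeds both modalities in one 3D positional space
```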
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG and CKG completion suffer from long-tail relations and newly added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges and commonly used KGs and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KG and the methods employed. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions for FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features (sketched below) and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
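A simplified, assumed formulation of the mask-based dynamic class centers: support masks pool support features into one center per class, which then re-weights the query feature map channel-wise.

```python
# Sketch of mask-pooled class centers re-weighting query features
# (a simplified, assumed formulation, not RefT's exact module).
import torch

def reweight_query(query_feats, support_feats, support_mask):
    """query_feats: (B, C, H, W); support_feats: (B, C, H, W);
    support_mask: (B, 1, H, W) binary foreground mask."""
    masked = support_feats * support_mask
    # masked average pooling over the foreground -> one center per class
    center = masked.sum(dim=(2, 3)) / support_mask.sum(dim=(2, 3)).clamp(min=1e-6)
    weights = torch.sigmoid(center)                   # (B, C) channel weights
    return query_feats * weights[:, :, None, None]    # re-weighted query features
```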